Appendix
The third entry varies under perturbation. We can compute the local indicator matrices at this layer accordingly. We inherit the notation from the main text, and use $I^L$ to denote the indicator matrix for linear ReLU outputs. The key observation from this approach is that we can "merge" the weight matrices together for linear neurons (the first term in Eq. (19)). Then we have $\|W_3 D_2^L W_2 D_1^L W_1\| \le \|W_3\|\,\|W_2\|\,\|W_1\|$. Consider a neural network that maps input $x$ to output $z = F(x)$, where $z \in \mathbb{R}^N$.
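The merged-weights inequality above can be checked numerically. A minimal sketch, assuming synthetic weight matrices and indicator patterns (all values below are made up for illustration): since each $D^L$ is diagonal with entries in $\{0, 1\}$, the spectral norm of the merged product never exceeds the product of the individual spectral norms.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy three-layer chain z = W3 @ D2 @ W2 @ D1 @ W1 @ x, where D1, D2 are
# diagonal indicator matrices (1 = ReLU provably linear in the local
# neighborhood, 0 = provably constant).  All values are synthetic.
W1 = rng.normal(size=(40, 20))
W2 = rng.normal(size=(40, 40))
W3 = rng.normal(size=(10, 40))
D1 = np.diag(rng.integers(0, 2, size=40).astype(float))
D2 = np.diag(rng.integers(0, 2, size=40).astype(float))

norm = lambda W: np.linalg.norm(W, 2)   # induced 2- (spectral) norm

merged = norm(W3 @ D2 @ W2 @ D1 @ W1)    # local bound with merged weights
product = norm(W3) * norm(W2) * norm(W1) # global Lipschitz product

# Since ||D|| <= 1, the merged norm can only be tighter.
assert merged <= product + 1e-9
print(merged, product)
```

In practice the merged bound is often much smaller than the product, since cancellations inside the matrix product are accounted for rather than bounded away.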
- Asia > Middle East > Jordan (0.04)
- North America > United States > Pennsylvania > Allegheny County > Pittsburgh (0.04)
- North America > United States > California > San Diego County > San Diego (0.04)
Control with Patterns Based on D-learning
Data are now abundant and easy to accumulate, and increasingly powerful computing makes big data practical to handle. This opens a new way of solving control problems that were previously challenging to analyze. This paper proposes a new control approach, control with patterns (CWP), for data sets corresponding to nonlinear dynamical systems in which feature abstraction must be considered for unstructured data feedback. For data sets of this kind, a new definition, exponential attraction on data sets, is proposed to describe the nonlinear dynamical systems under consideration. Based on the data sets and parameterized Lyapunov functions, the problem of exponential attraction on data sets is converted to a pattern classification one. Furthermore, D-learning is proposed to perform CWP without knowledge of the system dynamics.
- Asia > Japan > Honshū > Kantō > Tokyo Metropolis Prefecture > Tokyo (0.14)
- North America > United States > Texas > Travis County > Austin (0.04)
- North America > United States > Pennsylvania > Philadelphia County > Philadelphia (0.04)
- (4 more...)
- Information Technology > Artificial Intelligence > Robots (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Reinforcement Learning (0.69)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (0.68)
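The conversion the CWP abstract describes, from checking exponential attraction on sampled data to a classification problem, can be roughly sketched as a labeling step. Everything below (the quadratic $V$, the decay rate, the sample time, and the synthetic linear dynamics standing in for the unknown system) is an illustrative assumption, not the paper's actual construction.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sketch: label sampled transitions (x_k, x_{k+1}) by whether
# a candidate quadratic Lyapunov function V(x) = x^T P x decays at an
# exponential rate, turning the attraction check into binary classification.
P = np.eye(2)            # candidate Lyapunov parameterization (assumed)
lam, dt = 0.5, 0.1       # assumed decay rate and sample time

def V(x):
    return x @ P @ x

def label(x_now, x_next):
    # 1 = transition consistent with exponential attraction, 0 = not
    return int(V(x_next) <= np.exp(-lam * dt) * V(x_now))

# Synthetic data from a stable linear map (stands in for unknown dynamics).
A = np.array([[0.9, 0.05], [0.0, 0.9]])
X = rng.normal(size=(100, 2))
labels = [label(x, A @ x) for x in X]
print(sum(labels), "of", len(X), "transitions satisfy the decay condition")
```

A classifier trained on such labels over a parameterized family of $V$ would then play the role the abstract assigns to pattern classification.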
Guaranteed Nonlinear Tracking in the Presence of DNN-Learned Dynamics With Contraction Metrics and Disturbance Estimation
Zhao, Pan, Guo, Ziyao, Gahlawat, Aditya, Kang, Hyungsoo, Hovakimyan, Naira
This paper presents an approach to trajectory-centric learning control based on contraction metrics and disturbance estimation for nonlinear systems subject to matched uncertainties. The approach uses deep neural networks to learn uncertain dynamics while still providing guarantees of transient tracking performance throughout the learning phase. Within the proposed approach, a disturbance estimation law is adopted to estimate the pointwise value of the uncertainty, with pre-computable estimation error bounds (EEBs). The learned dynamics, the estimated disturbances, and the EEBs are then incorporated in a robust Riemann energy condition to compute the control law that guarantees exponential convergence of actual trajectories to desired ones throughout the learning phase, even when the learned model is poor. On the other hand, with improved accuracy, the learned model can help improve the robustness of the tracking controller, e.g., against input delays, and can be incorporated to plan better trajectories with improved performance, e.g., lower energy consumption and shorter travel time. The proposed framework is validated on a planar quadrotor example.
- North America > United States > Pennsylvania > Philadelphia County > Philadelphia (0.04)
- North America > United States > New York > Nassau County > Mineola (0.04)
- North America > United States > Massachusetts > Suffolk County > Boston (0.04)
- (2 more...)
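The pointwise disturbance estimation mentioned in the abstract can be illustrated with a simple discrete-time sketch. The scalar dynamics, estimator form, and all numbers below are invented for illustration; the paper's actual estimation law and its error bounds (EEBs) are more involved.

```python
# Illustrative scalar system x_dot = f(x) + b*(u + d) with a matched
# disturbance d; estimate d by inverting the nominal model over one step.
dt = 0.01
f = lambda x: -x        # assumed nominal dynamics
b = 1.0
d_true = 0.7            # constant disturbance (unknown to the estimator)

x = 0.5
u = 0.0
# Propagate the true system one step (forward Euler).
x_next = x + dt * (f(x) + b * (u + d_true))

# Pointwise estimate: attribute the model mismatch over the step to d.
d_hat = ((x_next - x) / dt - f(x) - b * u) / b
print(d_hat)  # recovers d_true up to floating-point error in this toy setup
```

In a continuous-time setting the finite-difference step introduces an estimation error that shrinks with the sample time, which is the kind of quantity the pre-computable EEBs would bound.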
Training Certifiably Robust Neural Networks with Efficient Local Lipschitz Bounds
Huang, Yujia, Zhang, Huan, Shi, Yuanyuan, Kolter, J Zico, Anandkumar, Anima
Certified robustness is a desirable property for deep neural networks in safety-critical applications, and popular training algorithms can certify robustness of a neural network by computing a global bound on its Lipschitz constant. However, such a bound is often loose: it tends to over-regularize the neural network and degrade its natural accuracy. A tighter Lipschitz bound may provide a better tradeoff between natural and certified accuracy, but is generally hard to compute exactly due to non-convexity of the network. In this work, we propose an efficient and trainable \emph{local} Lipschitz upper bound by considering the interactions between activation functions (e.g. ReLU) and weight matrices. Specifically, when computing the induced norm of a weight matrix, we eliminate the corresponding rows and columns where the activation function is guaranteed to be a constant in the neighborhood of each given data point, which provides a provably tighter bound than the global Lipschitz constant of the neural network. Our method can be used as a plug-in module to tighten the Lipschitz bound in many certifiable training algorithms. Furthermore, we propose to clip activation functions (e.g., ReLU and MaxMin) with a learnable upper threshold and a sparsity loss to assist the network to achieve an even tighter local Lipschitz bound. Experimentally, we show that our method consistently outperforms state-of-the-art methods in both clean and certified accuracy on MNIST, CIFAR-10 and TinyImageNet datasets with various network architectures.
- Asia > Middle East > Jordan (0.04)
- Oceania > Australia > New South Wales > Sydney (0.04)
- North America > United States > Pennsylvania > Allegheny County > Pittsburgh (0.04)
- North America > United States > California > San Diego County > San Diego (0.04)
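The row/column elimination described in the local Lipschitz abstract can be demonstrated on a toy two-layer network. The bounds and matrices below are synthetic placeholders; in the actual method the pre-activation bounds would come from a bound-propagation pass around a given input.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer piece z = W2 @ relu(W1 @ x).  Suppose bound propagation
# showed some ReLU units are provably inactive near a given input
# (pre-activation upper bound <= 0): their outputs are constant, so the
# matching rows of W1 and columns of W2 drop out of the induced norms.
W1 = rng.normal(size=(50, 30))
W2 = rng.normal(size=(10, 50))
upper = rng.normal(size=50)     # hypothetical pre-activation upper bounds
active = upper > 0              # units that can still vary locally

norm = lambda W: np.linalg.norm(W, 2)   # induced 2- (spectral) norm

global_bound = norm(W2) * norm(W1)
local_bound = norm(W2[:, active]) * norm(W1[active, :])

# A submatrix never has a larger spectral norm, so the local bound is tighter.
assert local_bound <= global_bound + 1e-9
print(local_bound, global_bound)
```

This is the sense in which the paper's local bound is provably no looser than the global Lipschitz constant: restricting to the locally varying units can only shrink each induced norm.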